Linear discriminant analysis

Linear discriminant analysis (LDA) is a generalization of Fisher's linear discriminant, a method used in statistics, pattern recognition and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification.
LDA is closely related to analysis of variance (ANOVA) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements. However, ANOVA uses categorical independent variables and a continuous dependent variable, whereas discriminant analysis has continuous independent variables and a categorical dependent variable (''i.e.'' the class label).〔Analyzing Quantitative Data: An Introduction for Social Researchers, Debra Wetcher-Hendricks, p. 288〕 Logistic regression and probit regression are more similar to LDA than ANOVA is, as they also explain a categorical variable by the values of continuous independent variables. These other methods are preferable in applications where it is not reasonable to assume that the independent variables are normally distributed, which is a fundamental assumption of the LDA method.
LDA is also closely related to principal component analysis (PCA) and factor analysis in that they all look for linear combinations of variables which best explain the data. LDA explicitly attempts to model the difference between the classes of data. PCA, on the other hand, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities. Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made.
LDA works when the measurements made on independent variables for each observation are continuous quantities. When dealing with categorical independent variables, the equivalent technique is discriminant correspondence analysis.〔Abdi, H. (2007). "Discriminant correspondence analysis." In: N.J. Salkind (Ed.): ''Encyclopedia of Measurement and Statistics''. Thousand Oaks (CA): Sage. pp. 270–275.〕〔Perrière, G., & Thioulouse, J. (2003). "Use of Correspondence Discriminant Analysis to predict the subcellular location of bacterial proteins", ''Computer Methods and Programs in Biomedicine'', 70, 99–105.〕
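As a brief illustration of both uses described above, the following is a minimal sketch, assuming scikit-learn is available; the Iris dataset and variable names are illustrative choices, not part of the article:

# Minimal sketch: LDA as a linear classifier and as dimensionality reduction.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)             # 4 continuous features, 3 classes

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)                                 # estimates class means and a shared covariance
print("training accuracy:", lda.score(X, y))  # use as a linear classifier

# Dimensionality reduction: project onto at most (n_classes - 1) = 2 discriminant axes.
X_reduced = lda.transform(X)
print("reduced shape:", X_reduced.shape)      # (150, 2)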
==LDA for two classes==
Consider a set of observations \vec x (also called features, attributes, variables or measurements) for each sample of an object or event with known class ''y''. This set of samples is called the training set. The classification problem is then to find a good predictor for the class ''y'' of any sample of the same distribution (not necessarily from the training set) given only an observation \vec x .

LDA approaches the problem by assuming that the conditional probability density functions p(\vec x|y=0) and p(\vec x|y=1) are both normally distributed with mean and covariance parameters \left(\vec \mu_0, \Sigma_0\right) and \left(\vec \mu_1, \Sigma_1\right), respectively. Under this assumption, the Bayes optimal solution is to predict points as being from the second class if the log of the likelihood ratio exceeds some threshold T, so that:
: (\vec x - \vec \mu_0)^T \Sigma_0^{-1} (\vec x - \vec \mu_0) + \ln|\Sigma_0| - (\vec x - \vec \mu_1)^T \Sigma_1^{-1} (\vec x - \vec \mu_1) - \ln|\Sigma_1| \ > \ T
Without any further assumptions, the resulting classifier is referred to as QDA (quadratic discriminant analysis).
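As an illustrative sketch (not from the article), the quadratic rule above can be evaluated directly from estimated class means and covariances; the synthetic data, the choice T = 0 (which corresponds to equal class priors), and all variable names below are assumptions made for the example:

import numpy as np

rng = np.random.default_rng(0)
# Synthetic two-class data with different covariances (the QDA setting).
X0 = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=200)
X1 = rng.multivariate_normal([2.0, 1.0], [[0.5, 0.0], [0.0, 2.0]], size=200)

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
S0, S1 = np.cov(X0, rowvar=False), np.cov(X1, rowvar=False)

def qda_statistic(x):
    # Left-hand side of the decision criterion above; large values favour class 1.
    d0, d1 = x - mu0, x - mu1
    return (d0 @ np.linalg.inv(S0) @ d0 + np.log(np.linalg.det(S0))
            - d1 @ np.linalg.inv(S1) @ d1 - np.log(np.linalg.det(S1)))

T = 0.0                                   # threshold; 0 corresponds to equal class priors
x_new = np.array([1.5, 0.5])
print("class 1" if qda_statistic(x_new) > T else "class 0")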
LDA instead makes the additional simplifying assumptions of homoscedasticity (''i.e.'' that the class covariances are identical, so \Sigma_0 = \Sigma_1 = \Sigma) and that this covariance has full rank.
In this case, several terms cancel:
: {\vec x}^T \Sigma_0^{-1} \vec x = {\vec x}^T \Sigma_1^{-1} \vec x
: {\vec x}^T \Sigma_i^{-1} \vec \mu_i = {\vec \mu_i}^T \Sigma_i^{-1} \vec x because \Sigma_i is Hermitian
and the above decision criterion becomes a threshold on the dot product
: \vec w \cdot \vec x > c
for some threshold constant ''c'', where
:\vec w = \Sigma^{-1} (\vec \mu_1 - \vec \mu_0)
: c = \frac{1}{2} \left( T - {\vec \mu_0}^T \Sigma_0^{-1} \vec \mu_0 + {\vec \mu_1}^T \Sigma_1^{-1} \vec \mu_1 \right)
This means that the criterion of an input \vec x being in a class ''y'' is purely a function of this linear combination of the known observations.
It is often useful to see this conclusion in geometrical terms: the criterion of an input \vec x being in a class ''y'' is purely a function of the projection of the multidimensional-space point \vec x onto the vector \vec w (thus, only its direction matters). In other words, the observation belongs to ''y'' if the corresponding \vec x is located on a certain side of a hyperplane perpendicular to \vec w . The location of the hyperplane is defined by the threshold ''c''.
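A self-contained sketch of this closed-form rule, assuming synthetic data, equal class priors (so T = 0), and a pooled covariance estimated as the simple average of the two class covariances; all data and names are illustrative only:

import numpy as np

rng = np.random.default_rng(1)
Sigma_true = np.array([[1.0, 0.4], [0.4, 1.5]])            # shared covariance (homoscedastic)
X0 = rng.multivariate_normal([0.0, 0.0], Sigma_true, size=300)
X1 = rng.multivariate_normal([2.0, 1.0], Sigma_true, size=300)

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sigma = 0.5 * (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False))   # pooled estimate

w = np.linalg.solve(Sigma, mu1 - mu0)                       # w = Sigma^{-1} (mu_1 - mu_0)
T = 0.0                                                     # log-likelihood-ratio threshold
c = 0.5 * (T - mu0 @ np.linalg.solve(Sigma, mu0) + mu1 @ np.linalg.solve(Sigma, mu1))

x_new = np.array([1.5, 0.5])
# Which side of the hyperplane w . x = c the point falls on decides the class.
print("class 1" if w @ x_new > c else "class 0")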

Excerpt source: Wikipedia, the free encyclopedia. Read the full "linear discriminant analysis" article on Wikipedia.


